
    Improving the exploitation of linguistic annotations in ELAN

    This paper discusses improvements in recent and planned versions of the multimodal annotation tool ELAN that are aimed at making annotated files easier to exploit. Support for multilingual documents is increased by allowing multilingual vocabularies and by letting a language be specified per document, per annotation layer (tier), or per annotation. In addition, the search facilities and the display of search results have been improved, which is especially relevant for interpreting the results of complex multi-tier searches.
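    As an illustration of what a multi-tier search amounts to in practice, here is a minimal sketch that looks for temporally overlapping annotations on two tiers of an .eaf file. It uses the third-party pympi library rather than ELAN itself, and the file name, tier names, and annotation value are hypothetical placeholders.

```python
# A minimal multi-tier overlap search, assuming the third-party pympi
# library (pip install pympi-ling). 'session.eaf' and the tier names
# are hypothetical placeholders, not values from the paper.
import pympi

eaf = pympi.Elan.Eaf('session.eaf')

def overlapping(tier_a, tier_b, value_b):
    """Yield annotations on tier_a that temporally overlap an
    annotation on tier_b whose value equals value_b."""
    hits_b = [a[:2] for a in eaf.get_annotation_data_for_tier(tier_b)
              if a[2] == value_b]
    for a in eaf.get_annotation_data_for_tier(tier_a):
        start, end, value = a[0], a[1], a[2]
        if any(start < hi and end > lo for lo, hi in hits_b):
            yield start, end, value

# E.g. right-hand annotations co-occurring with the mouthing 'index'.
for start_ms, end_ms, value in overlapping('RightHand', 'Mouthing', 'index'):
    print(f'{start_ms}-{end_ms}: {value}')
```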

    Phonetic implementation of phonological categories in Sign Language of the Netherlands

    This thesis describes several patterns of phonetic variation in Sign Language of the Netherlands. While lexical variation between different regions has been found in the Netherlands, little is known about phonetic or phonological variation. Phonetic variation in the realization of some of the traditional handshape and orientation features is analyzed in detail. Furthermore, data were elicited from different registers: short-distance signing (“whispering”) was compared to long-distance signing (“shouting”). Results show that differences between registers lead not only to variation in movement size, but also to changes in the traditional phonological categories. In enlarged realizations, as in shouting, handshape and orientation changes may be enhanced by a location change; in reduced forms, as in whispering, location changes may be realized as changes in orientation or handshape. While the distinction between the three parameters handshape, orientation and location remains valid, it is argued that, in a comprehensive analysis of the various differences between registers, their definition needs to be stated in terms of global perceptual targets rather than detailed articulatory terms. The data thus provide evidence for a strict separation of perceptual and articulatory characterizations of signs. The lexical specification contains only perceptual targets. The variation is thus not generated by a phonological process, but is a matter of phonetic implementation.

    Combining video and numeric data in the analysis of sign languages with the ELAN annotation software

    This paper describes hardware and software that can be used for the phonetic study of sign languages. The field of sign language phonetics is characterised, and the hardware that is currently in use is described. The paper focuses on the software that was developed to enable the recording of finger and hand movement data, and on the additions to the ELAN annotation software that facilitate further visualisation and analysis of these data.
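    The core data-handling step the paper describes, aligning a numeric movement recording with time-aligned annotations, can be sketched as follows. This is a minimal illustration with synthetic stand-in data; the column name and annotation labels are hypothetical, not the authors' actual format.

```python
# Sketch: slicing a numeric movement recording by ELAN annotation
# intervals. All data below is synthetic; the column 'flex_index'
# and the annotation labels are hypothetical.
import numpy as np
import pandas as pd

time_ms = np.arange(0, 3000, 10)                      # 10 ms sampling
samples = pd.DataFrame({'time_ms': time_ms,
                        'flex_index': np.sin(time_ms / 200.0)})

# (start_ms, end_ms, label) tuples as they might be exported from ELAN.
annotations = [(1200, 1850, 'HOUSE'), (2100, 2600, 'BOAT')]

for start, end, label in annotations:
    seg = samples[(samples.time_ms >= start) & (samples.time_ms < end)]
    # Per-annotation summary of the sensor channel for further analysis.
    print(label, round(seg.flex_index.mean(), 3), round(seg.flex_index.std(), 3))
```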

    The perception of stroke-to-stroke turn boundaries in signed conversation

    Speaker transitions in conversation are often brief, with minimal vocal overlap. Signed languages appear to defy this pattern with frequent, long spans of simultaneous signing. But recent evidence suggests that turn boundaries in signed language may only include the content-bearing parts of the turn (from the first stroke to the last), and not all turn-related movement (from first preparation to final retraction). We tested whether signers were able to anticipate “stroke-to-stroke” turn boundaries with only minimal conversational context. We found that, indeed, signers anticipated turn boundaries at the ends of turn-final strokes. Signers often responded early, especially when the turn was long or contained multiple possible end points. Early responses for long turns were especially apparent for interrogatives: long interrogative turns showed much greater anticipation than short ones.
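    The distinction between the two boundary definitions can be made concrete with a small sketch: given one turn's movement phases, compute its span once over all turn-related movement and once stroke-to-stroke. The phase labels and timings below are illustrative only, not data from the study.

```python
# Sketch: a turn's span under the two boundary definitions the paper
# contrasts. Phase labels and times are illustrative placeholders.
phases = [  # (start_ms, end_ms, phase) for one signed turn
    (0,    300,  'preparation'),
    (300,  900,  'stroke'),
    (900,  1100, 'stroke'),
    (1100, 1500, 'retraction'),
]

full_span = (phases[0][0], phases[-1][1])      # first preparation -> final retraction
strokes = [p for p in phases if p[2] == 'stroke']
stroke_span = (strokes[0][0], strokes[-1][1])  # first stroke -> last stroke

print('full movement span :', full_span)    # (0, 1500)
print('stroke-to-stroke   :', stroke_span)  # (300, 1100)
```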

    Unsupervised feature learning for visual sign language identification

    Prior research on language identification has focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is first trained on unlabelled video data (unsupervised feature learning); using these features, it is then trained to discriminate between six sign languages (supervised learning). We ran experiments on video samples involving 30 signers (running for a total of 6 hours). Using leave-one-signer-out cross-validation, our evaluation on short video samples shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are a natural fit, and our results indicate that they make sign language identification realistic.
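    The two-stage recipe the abstract outlines, unsupervised feature learning followed by supervised classification with leave-one-signer-out cross-validation, can be sketched with scikit-learn. This is not the authors' pipeline: the descriptors are random stand-ins and a bag-of-visual-words encoder is assumed purely for concreteness, so the printed score is meaningless; the point is the shape of the pipeline.

```python
# Sketch: unsupervised feature learning (k-means codebook) followed by
# supervised classification with leave-one-signer-out cross-validation.
# All arrays are synthetic stand-ins; scikit-learn is assumed.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Stage 1: learn a codebook from unlabelled patch descriptors.
patches = rng.normal(size=(5000, 64))
codebook = MiniBatchKMeans(n_clusters=100, random_state=0).fit(patches)

def encode(video_patches):
    """Bag-of-visual-words histogram over the learned codebook."""
    words = codebook.predict(video_patches)
    return np.bincount(words, minlength=100) / len(words)

# Stage 2: one histogram per video; labels = language, groups = signer.
X = np.stack([encode(rng.normal(size=(200, 64))) for _ in range(60)])
y = rng.integers(0, 6, size=60)          # six sign languages
signers = rng.integers(0, 30, size=60)   # thirty signers

scores = cross_val_score(LinearSVC(), X, y, groups=signers,
                         cv=LeaveOneGroupOut())
print('mean accuracy:', scores.mean())   # near chance on random labels
```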

    BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues

    Recent progress in fine-grained gesture and action classification and in machine translation points to the possibility of automated sign language recognition becoming a reality. A key stumbling block in making progress towards this goal is a lack of appropriate training data, stemming from the high complexity of sign annotation and a limited supply of qualified annotators. In this work, we introduce a new scalable approach to data collection for sign recognition in continuous videos. We make use of weakly-aligned subtitles for broadcast footage together with a keyword spotting method to automatically localise sign instances for a vocabulary of 1,000 signs in 1,000 hours of video. We make the following contributions: (1) We show how to use mouthing cues from signers to obtain high-quality annotations from video data; the result is the BSL-1K dataset, a collection of British Sign Language (BSL) signs of unprecedented scale. (2) We show that we can use BSL-1K to train strong sign recognition models for co-articulated signs in BSL, and that these models additionally form excellent pretraining for other sign languages and benchmarks; we exceed the state of the art on both the MSASL and WLASL benchmarks. Finally, (3) we propose new large-scale evaluation sets for the tasks of sign recognition and sign spotting and provide baselines which we hope will serve to stimulate research in this area. Appears in: European Conference on Computer Vision 2020 (ECCV 2020).
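    The localisation idea, using weakly-aligned subtitles to restrict where a keyword (mouthing) spotter may fire, can be sketched as follows. The spotter scores, frame rate, subtitle windows, and threshold are all synthetic placeholders, not values from the paper.

```python
# Sketch: inside each weakly-aligned subtitle window, take the frame
# where a keyword (mouthing) spotter is most confident, and keep it as
# a sign instance if it clears a threshold. All values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
fps = 25
probs = rng.random(25 * 60)  # per-frame spotter confidence, one keyword, 1 min

# Subtitle windows (start_s, end_s) whose text contains the keyword.
windows = [(2.0, 6.5), (30.0, 34.0)]
threshold = 0.9              # hypothetical acceptance threshold

instances = []
for start_s, end_s in windows:
    lo, hi = int(start_s * fps), int(end_s * fps)
    peak = lo + int(np.argmax(probs[lo:hi]))
    if probs[peak] >= threshold:
        instances.append((peak / fps, float(probs[peak])))

print(instances)  # [(time_in_seconds, confidence), ...]
```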